
    Generalizations of the Hamming Associative Memory

    This Letter reviews four models of associative memory that generalize the operation of the Hamming associative memory: the grounded Hamming memory, the cellular Hamming memory, the decoupled Hamming memory, and the two-level decoupled Hamming memory. These memory models offer high performance and allow for a more practical hardware realization than the Hamming net and other fully interconnected neural net architectures.
    Peer Reviewed
    http://deepblue.lib.umich.edu/bitstream/2027.42/45397/1/11063_2004_Article_319505.pd
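    For readers unfamiliar with the baseline these four models generalize, the sketch below shows the core recall rule of a Hamming associative memory: store binary prototype patterns and return the stored pattern nearest the probe in Hamming distance. This is a minimal Python illustration; the patterns and function name are made up, and none of the four generalized models' refinements are shown.

        import numpy as np

        def hamming_recall(prototypes, probe):
            """Return the stored prototype closest to `probe` in Hamming distance.

            prototypes: (m, n) array of stored binary patterns (0/1)
            probe:      (n,) binary input pattern, possibly corrupted
            """
            distances = np.count_nonzero(prototypes != probe, axis=1)
            return prototypes[np.argmin(distances)]

        # Three stored 8-bit patterns; the probe is pattern 1 with one bit flipped.
        stored = np.array([
            [0, 0, 0, 0, 1, 1, 1, 1],
            [1, 0, 1, 0, 1, 0, 1, 0],
            [1, 1, 1, 1, 0, 0, 0, 0],
        ])
        noisy = np.array([1, 0, 1, 1, 1, 0, 1, 0])
        print(hamming_recall(stored, noisy))  # -> [1 0 1 0 1 0 1 0]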

    Nonparametric Approaches for Estimating Driver Pose


    Generating word embeddings from an extreme learning machine for sentiment analysis and sequence labeling tasks

    Word embeddings are low-dimensional distributed representations of words produced by a family of language modeling and feature learning techniques from Natural Language Processing (NLP): words or phrases from the vocabulary are mapped to vectors of real numbers in a low-dimensional space. In previous work, we proposed using an Extreme Learning Machine (ELM) for generating word embeddings. In this research, we apply the ELM-based word embeddings to the NLP task of Text Categorization, specifically Sentiment Analysis and Sequence Labeling. The ELM-based word embeddings use a count-based approach similar to the Global Vectors (GloVe) model: a word-context matrix is computed and matrix factorization is then applied. A comparative study is conducted with Word2Vec and GloVe, the two popular state-of-the-art models. The results show that ELM-based word embeddings slightly outperform both methods on the Sentiment Analysis and Sequence Labeling tasks. In addition, the ELM model requires only one hyperparameter, whereas several are needed for the other methods. ELM-based word embeddings are thus comparable to the state-of-the-art Word2Vec and GloVe models, and the count-based ELM model yields word similarities close to those of both the count-based GloVe and the prediction-based Word2Vec models, with only subtle differences.
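    The count-based pipeline the abstract describes can be sketched in a few lines: build a word-context co-occurrence matrix over a corpus, then factorize it into low-dimensional word vectors. The abstract does not detail the ELM factorization itself, so truncated SVD stands in for that step here; the toy corpus, window size, and embedding dimension are all illustrative.

        import numpy as np

        def cooccurrence_matrix(corpus, window=2):
            """Count word-context co-occurrences within a symmetric window."""
            vocab = sorted({w for sent in corpus for w in sent})
            index = {w: i for i, w in enumerate(vocab)}
            counts = np.zeros((len(vocab), len(vocab)))
            for sent in corpus:
                for i, w in enumerate(sent):
                    lo, hi = max(0, i - window), min(len(sent), i + window + 1)
                    for j in range(lo, hi):
                        if j != i:
                            counts[index[w], index[sent[j]]] += 1
            return counts, vocab

        def embed(counts, dim=2):
            """Factorize the count matrix; truncated SVD stands in for the ELM step."""
            u, s, _ = np.linalg.svd(counts)
            return u[:, :dim] * s[:dim]

        corpus = [
            "the movie was great".split(),
            "the movie was terrible".split(),
            "great acting and a great plot".split(),
        ]
        counts, vocab = cooccurrence_matrix(corpus)
        for word, vec in zip(vocab, embed(counts, dim=2)):
            print(f"{word:>9s}  {vec}")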